research task
Towards Personalized Deep Research: Benchmarks and Evaluations
Liang, Yuan, Li, Jiaxian, Wang, Yuqing, Wang, Piaohong, Tian, Motong, Liu, Pai, Qiao, Shuofei, Fang, Runnan, Zhu, He, Zhang, Ge, Liu, Minghao, Jiang, Yuchen Eleanor, Zhang, Ningyu, Zhou, Wangchunshu
Deep Research Agents (DRAs) can autonomously conduct complex investigations and generate comprehensive reports, demonstrating strong real-world potential. However, existing evaluations rely mostly on close-ended benchmarks or generic quality metrics, while open-ended deep research benchmarks remain scarce and typically neglect personalization, a critical dimension for individual users. To bridge this gap, we introduce Personalized Deep Research Bench (PDR-Bench), the first benchmark for evaluating personalization in DRAs. It pairs 50 diverse research tasks across 10 domains with 25 authentic user profiles that combine structured persona attributes with dynamic real-world contexts, yielding 250 realistic user-task queries. To assess system performance, we propose the PQR Evaluation Framework, which jointly measures Personalization Alignment, Content Quality, and Factual Reliability. Our experiments on a range of systems highlight current capabilities and limitations in handling personalized deep research. This work establishes a rigorous foundation for developing and evaluating the next generation of truly personalized AI research assistants.
- Health & Medicine (0.67)
- Information Technology > Security & Privacy (0.46)
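The abstract names the three PQR dimensions but not how they combine. A minimal sketch of one plausible aggregation, assuming each dimension is scored in [0, 1]; the class, function, and equal default weights below are illustrative assumptions, not the actual PQR framework:

```python
from dataclasses import dataclass

@dataclass
class PQRScore:
    """Hypothetical container for the three PQR dimensions, each in [0, 1]."""
    personalization: float  # Personalization Alignment
    quality: float          # Content Quality
    reliability: float      # Factual Reliability

def aggregate_pqr(score: PQRScore, weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted mean over the three dimensions (equal weights by default).
    The real framework may combine the dimensions differently."""
    wp, wq, wr = weights
    return wp * score.personalization + wq * score.quality + wr * score.reliability
```

A weighted mean is only one design choice; a benchmark could just as well report the three dimensions separately or take their minimum to penalize any single weak axis.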
InnovatorBench: Evaluating Agents' Ability to Conduct Innovative LLM Research
Wu, Yunze, Fu, Dayuan, Si, Weiye, Huang, Zhen, Jiang, Mohan, Li, Keyu, Xia, Shijie, Sun, Jie, Xu, Tianze, Hu, Xiangkun, Lu, Pengrui, Cai, Xiaojie, Ye, Lyumanshan, Zhu, Wenhong, Xiao, Yang, Liu, Pengfei
AI agents could accelerate scientific discovery by automating hypothesis formation, experiment design, coding, execution, and analysis, yet existing benchmarks probe narrow skills in simplified settings. To address this gap, we introduce InnovatorBench, a benchmark-platform pair for realistic, end-to-end assessment of agents performing Large Language Model (LLM) research. It comprises 20 tasks spanning Data Construction, Filtering, Augmentation, Loss Design, Reward Design, and Scaffold Construction, each requiring runnable artifacts that are assessed for correctness, performance, output quality, and uncertainty. To support agent operation, we develop ResearchGym, a research environment offering rich action spaces, distributed and long-horizon execution, asynchronous monitoring, and snapshot saving. We also implement a lightweight ReAct agent that couples explicit reasoning with executable planning, using frontier models such as Claude-4, GPT-5, GLM-4.5, and Kimi-K2. Our experiments demonstrate that while frontier models show promise in code-driven research tasks, they struggle with fragile algorithm-related tasks and with long-horizon decision making, exhibiting impatience, poor resource management, and overreliance on template-based reasoning. Furthermore, agents require over 11 hours to reach their best performance on InnovatorBench, underscoring the benchmark's difficulty and its potential to serve as the next generation of code-based research benchmarks.
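The "lightweight ReAct agent" mentioned above alternates explicit reasoning with executable actions. A minimal sketch of that loop, assuming a toy text protocol (`Thought:` / `Action: tool[input]` / `Answer:`); the `llm` callable, tool registry, and message format are all hypothetical stand-ins, not InnovatorBench's actual implementation:

```python
def react_loop(llm, tools, task, max_steps=10):
    """Minimal ReAct-style loop (sketch): the model interleaves explicit
    reasoning ('Thought') with executable actions until it emits an answer.
    `llm(transcript)` is assumed to return text like
    'Thought: ...\nAction: tool_name[input]' or 'Answer: ...'."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += "\n" + step
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if "Action:" in step:
            # Parse 'Action: name[arg]' and feed the tool's output back.
            name, _, arg = step.split("Action:", 1)[1].strip().partition("[")
            observation = tools[name](arg.rstrip("]"))
            transcript += f"\nObservation: {observation}"
    return None  # step budget exhausted without an answer
```

The growing transcript is what lets each reasoning step condition on earlier observations, which is the core of the ReAct pattern.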
InternAgent: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification
InternAgent Team, null, Zhang, Bo, Feng, Shiyang, Yan, Xiangchao, Yuan, Jiakang, Ma, Runmin, Hu, Yusong, Yu, Zhiyin, He, Xiaohan, Huang, Songtao, Hou, Shaowei, Nie, Zheng, Wang, Zhilong, Liu, Jinyao, Peng, Tianshuo, Ye, Peng, Zhou, Dongzhan, Zhang, Shufei, Wang, Xiaosong, Zhang, Yilan, Li, Meng, Tu, Zhongying, Yue, Xiangyu, Ouyang, Wangli, Zhou, Bowen, Bai, Lei
Artificial Intelligence (AI) is accelerating the transformation of scientific research paradigms, not only enhancing research efficiency but also driving innovation. We introduce InternAgent, a unified closed-loop multi-agent framework to conduct Autonomous Scientific Research (ASR) across various scientific research fields, enabling researchers to tackle complicated problems in these fields with unprecedented speed and precision. InternAgent highlights three key advantages: 1) Scalability: InternAgent has demonstrated its versatility across 12 scientific research tasks, capable of generating innovative ideas to enhance the performance of baseline code. 2) Interactivity: InternAgent provides an interface for human expert feedback and multi-agent interaction in automated end-to-end processes, allowing for the seamless integration of domain expert knowledge. 3) Efficiency: InternAgent has achieved promising performance gains in several scientific fields with significantly less time cost compared to human efforts. For instance, in reaction yield prediction, performance increased from 27.6% to 35.4% in just 12 hours; in enhancer activity prediction, accuracy rose from 0.65 to 0.79 with only 4 hours of processing; and in 2D semantic segmentation, precision advanced from 78.8% to 81.0% in a mere 30 hours.
Open Source Planning & Control System with Language Agents for Autonomous Scientific Discovery
Xu, Licong, Sarkar, Milind, Lonappan, Anto I., Zubeldia, Íñigo, Villanueva-Domingo, Pablo, Casas, Santiago, Fidler, Christian, Amancharla, Chetana, Tiwari, Ujjwal, Bayer, Adrian, Ekioui, Chadi Ait, Cranmer, Miles, Dimitrov, Adrian, Fergusson, James, Gandhi, Kahaan, Krippendorf, Sven, Laverick, Andrew, Lesgourgues, Julien, Lewis, Antony, Meier, Thomas, Sherwin, Blake, Surrao, Kristen, Villaescusa-Navarro, Francisco, Wang, Chi, Xu, Xueqing, Bolliet, Boris
We present a multi-agent system for automation of scientific research tasks, cmbagent (https://github.com/CMBAgents/cmbagent). The system is formed by about 30 Large Language Model (LLM) agents and implements a Planning & Control strategy to orchestrate the agentic workflow, with no human-in-the-loop at any point. Each agent specializes in a different task (performing retrieval on scientific papers and codebases, writing code, interpreting results, critiquing the output of other agents), and the system is able to execute code locally. We successfully apply cmbagent to carry out a PhD-level cosmology task (the measurement of cosmological parameters using supernova data) and evaluate its performance on two benchmark sets, finding superior performance over state-of-the-art LLMs. The source code is available on GitHub, demonstration videos are also provided, and the system is deployed on HuggingFace, with cloud availability to follow.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.15)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Workflow (0.68)
- Research Report (0.50)
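The Planning & Control strategy described in the cmbagent abstract can be sketched as a planner that decomposes the task and a control loop that dispatches each step to a specialist agent. The function, plan format, and role names below are illustrative assumptions, not cmbagent's actual API:

```python
def run_planning_and_control(planner, agents, task):
    """Sketch of a Planning & Control workflow (all names hypothetical):
    a planner agent decomposes `task` into (role, subtask) steps, and the
    control loop dispatches each step to the specialist agent registered
    for that role, passing earlier results along as context."""
    plan = planner(task)  # e.g. [("retrieval", "find supernova data"), ...]
    results = []
    for role, subtask in plan:
        results.append(agents[role](subtask, context=results))
    return results
```

Passing accumulated results as context is one simple way to let later agents (e.g. a critic) see what earlier specialists produced; a real system would also handle failures and re-planning.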
Get on the Train or be Left on the Station: Using LLMs for Software Engineering Research
Trinkenreich, Bianca, Calefato, Fabio, Hanssen, Geir, Blincoe, Kelly, Kalinowski, Marcos, Pezzè, Mauro, Tell, Paolo, Storey, Margaret-Anne
The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to fundamentally disrupt how research is conducted in the field. While perspectives on this transformation range from viewing LLMs as mere productivity tools to considering them revolutionary forces, we argue that the SE research community must proactively engage with and shape the integration of LLMs into research practices, emphasizing human agency in this transformation. As LLMs rapidly become integral to SE research - both as tools that support investigations and as subjects of study - a human-centric perspective is essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility, and driving advancements in the field. Drawing from discussions at the 2nd Copenhagen Symposium on Human-Centered AI in SE, this position paper employs McLuhan's Tetrad of Media Laws to analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities through accelerated ideation and automated processes, make some traditional research practices obsolete, retrieve valuable aspects of historical research approaches, and risk reversal effects when taken to extremes. Our analysis reveals opportunities for innovation and potential pitfalls that require careful consideration. We conclude with a call to action for the SE research community to proactively harness the benefits of LLMs while developing frameworks and guidelines to mitigate their risks, to ensure continued rigor and impact of research in an AI-augmented future.
- Europe > Denmark > Capital Region > Copenhagen (0.25)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.05)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- Research Report > New Finding (0.69)
- Research Report > Experimental Study (0.47)
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
Du, Mingxuan, Xu, Benfeng, Zhu, Chiwei, Wang, Xiaorui, Mao, Zhendong
Deep Research Agents are a prominent category of LLM-based agents. By autonomously orchestrating multistep web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, citation-rich reports--compressing hours of manual desk research into minutes. However, a comprehensive benchmark for systematically evaluating the capabilities of these agents remains absent. To bridge this gap, we present DeepResearch Bench, a benchmark consisting of 100 PhD-level research tasks, each meticulously crafted by domain experts across 22 distinct fields. Evaluating DRAs is inherently complex and labor-intensive. We therefore propose two novel methodologies that achieve strong alignment with human judgment. The first is a reference-based method with adaptive criteria to assess the quality of generated research reports. The second evaluates a DRA's information retrieval and collection capabilities by assessing its effective citation count and overall citation accuracy. We have open-sourced DeepResearch Bench and key components of these frameworks at https://github.com/Ayanami0730/deep_research_bench to accelerate the development of practical LLM-based agents.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- North America > Canada > Newfoundland and Labrador > Labrador (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
- Transportation > Ground > Road (0.46)
- Transportation > Electric Vehicle (0.46)
- Automobiles & Trucks (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.93)
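The DeepResearch Bench abstract measures retrieval via "effective citation count" and "overall citation accuracy". One plausible reading, assuming each citation has been externally judged as supporting its claim or not; the function and exact definitions are assumptions, not the benchmark's published metrics:

```python
def citation_metrics(citations):
    """Sketch of citation-grounding metrics in the spirit of the abstract.
    `citations` is a list of (claim, supported) pairs, where `supported`
    is a bool produced by some external judge. Here, effective citation
    count = number of supported citations, and citation accuracy =
    supported / total; the benchmark's exact definitions may differ."""
    supported = sum(1 for _, ok in citations if ok)
    total = len(citations)
    accuracy = supported / total if total else 0.0
    return supported, accuracy
```

Reporting the count alongside the ratio matters: a report with 2/2 correct citations is perfectly accurate but far less informative than one with 40/50.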
MLGym: A New Framework and Benchmark for Advancing AI Research Agents
Nathani, Deepak, Madaan, Lovish, Roberts, Nicholas, Bashlykov, Nikolay, Menon, Ajay, Moens, Vincent, Budhiraja, Amar, Magka, Despoina, Vorotilov, Vladislav, Chaurasia, Gaurav, Hupkes, Dieuwke, Cabral, Ricardo Silveira, Shavrina, Tatiana, Foerster, Jakob, Bachrach, Yoram, Wang, William Yang, Raileanu, Roberta
We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-Bench consists of 13 open-ended AI research tasks from diverse domains such as computer vision, natural language processing, reinforcement learning, and game theory. Solving these tasks requires real-world AI research skills such as generating new ideas and hypotheses, creating and processing data, implementing ML methods, training models, running experiments, analyzing the results, and iterating through this process to improve on a given task. We evaluate a number of frontier large language models (LLMs) on our benchmarks, such as Claude-3.5-Sonnet, Llama-3.1 405B, GPT-4o, o1-preview, and Gemini-1.5 Pro. Our MLGym framework makes it easy to add new tasks, integrate and evaluate models or agents, generate synthetic data at scale, as well as develop new learning algorithms for training agents on AI research tasks. We find that current frontier models can improve on the given baselines, usually by finding better hyperparameters, but do not generate novel hypotheses, algorithms, architectures, or substantial improvements. We open-source our framework and benchmark to facilitate future research in advancing the AI research capabilities of LLM agents.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > Middle East > Jordan (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Research Report > New Finding (1.00)
- Workflow (0.93)
- Leisure & Entertainment > Games (1.00)
- Health & Medicine (1.00)
- Education > Curriculum > Subject-Specific Education (0.46)
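MLGym frames research tasks as a Gym environment, which implies the familiar `reset`/`step` interface with a reward signal. A minimal sketch of what such an interface could look like when the reward is improvement over the best score so far; the class, observation shape, and reward rule are illustrative assumptions, not the actual MLGym API:

```python
class ResearchTaskEnv:
    """Sketch of a Gym-style interface for an ML research task, in the
    spirit of MLGym (illustrative only, not the real API). An 'action'
    is abstracted here to the evaluation score the agent's attempt
    achieves; reward is the improvement over the best score so far."""

    def __init__(self, baseline_score: float):
        self.best = baseline_score

    def reset(self):
        return {"task": "improve the baseline", "best_score": self.best}

    def step(self, action_score: float):
        reward = max(0.0, action_score - self.best)  # only improvement is rewarded
        self.best = max(self.best, action_score)
        done = False  # open-ended tasks: no terminal state here
        return {"best_score": self.best}, reward, done, {}
```

Rewarding only improvement over the running best matches the abstract's finding that agents mostly gain reward through hyperparameter tuning rather than novel ideas.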
AAAR-1.0: Assessing AI's Potential to Assist Research
Lou, Renze, Xu, Hanzi, Wang, Sijia, Du, Jiangshu, Kamoi, Ryo, Lu, Xiaoxin, Xie, Jian, Sun, Yuxuan, Zhang, Yusen, Ahn, Jihyun Janice, Fang, Hongchao, Zou, Zhuoyang, Ma, Wenchao, Li, Xi, Zhang, Kai, Xia, Congying, Huang, Lifu, Yin, Wenpeng
Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges and opportunities in leveraging LLMs for their own work, such as brainstorming research ideas, designing experiments, and writing or reviewing papers. In this study, we introduce AAAR-1.0, a benchmark dataset designed to evaluate LLM performance in four fundamental, expertise-intensive research tasks: (i) EquationInference, assessing the correctness of equations based on the contextual information in paper submissions; (ii) ExperimentDesign, designing experiments to validate research ideas and solutions; (iii) PaperWeakness, identifying weaknesses in paper submissions; and (iv) REVIEWCRITIQUE, identifying whether each segment of a human review is deficient or not. AAAR-1.0 differs from prior benchmarks in two key ways: first, it is explicitly research-oriented, with tasks requiring deep domain expertise; second, it is researcher-oriented, mirroring the primary activities that researchers engage in on a daily basis. An evaluation of both open-source and proprietary LLMs reveals their potential as well as limitations in conducting sophisticated research tasks. We will keep iterating AAAR-1.0 to new versions.
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.04)
- North America > United States > Pennsylvania (0.04)
- North America > United States > Ohio (0.04)
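Tasks like EquationInference and REVIEWCRITIQUE reduce, at their simplest, to comparing model judgments against expert labels. A minimal sketch of that scoring step, assuming paired label lists; the function name and plain-accuracy metric are assumptions, and AAAR-1.0's actual metrics may be richer:

```python
def task_accuracy(predictions, gold):
    """Sketch of label-matching evaluation for judgment-style tasks
    (e.g. is this equation correct? is this review segment deficient?).
    Compares model labels against expert labels and returns plain
    accuracy; the benchmark's real scoring may weight errors differently."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)
```

Generative tasks in the suite, such as ExperimentDesign, would instead need reference-based or judge-based scoring rather than exact label matching.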
A FAIR and Free Prompt-based Research Assistant
Shamsabadi, Mahsa, D'Souza, Jennifer
This demo presents the Research Assistant (RA) tool, developed to assist with six main types of research tasks. Each task is defined as a standardized instruction template, instantiated with user input, and finally applied as a prompt to AI tools well known for their sophisticated natural language processing abilities, such as ChatGPT (https://chat.openai.com/) and Gemini (https://gemini.google.com/app). The six research tasks addressed by RA are: creating FAIR research comparisons, ideating research topics, drafting grant applications, writing scientific blogs, aiding preliminary peer reviews, and formulating enhanced literature search queries. Because RA relies on generative AI tools like ChatGPT or Gemini, the same research task assistance can be offered in any scientific discipline. We demonstrate its versatility by sharing RA outputs in Computer Science, Virology, and Climate Science, where the output produced with RA's assistance mirrored that of a domain expert who performed the same research task.
- Research Report (0.52)
- Workflow (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.54)
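The RA abstract describes tasks as "standardized instruction templates, instantiated with user input, applied finally as prompts". A minimal sketch of that mechanism; the template dictionary, its wording, and the task keys below are hypothetical and do not reproduce RA's actual templates:

```python
RESEARCH_TEMPLATES = {
    # Hypothetical instruction templates for two of RA's six task types;
    # the real tool's wording is not reproduced here.
    "blog": "Write a scientific blog post about: {topic}. Audience: {audience}.",
    "search": "Reformulate this literature search query for better recall: {query}",
}

def build_prompt(task: str, **user_input) -> str:
    """Instantiate a standardized instruction template with user input,
    yielding a prompt that could then be sent to an AI tool such as
    ChatGPT or Gemini."""
    return RESEARCH_TEMPLATES[task].format(**user_input)
```

Because all discipline-specific content arrives through `user_input`, the same template works unchanged in Computer Science, Virology, or Climate Science, which is the discipline-agnostic property the abstract highlights.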
A Survey of Embodied AI: From Simulators to Research Tasks
Duan, Jiafei, Yu, Samson, Tan, Hui Li, Zhu, Hongyuan, Tan, Cheston
There has been an emerging paradigm shift from the era of "internet AI" to "embodied AI", whereby AI algorithms and agents no longer simply learn from datasets of images, videos or text curated primarily from the internet. Instead, they learn through embodied physical interactions with their environments, whether real or simulated. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a diversity of embodied AI research tasks. This growing interest in embodied AI is beneficial to the greater pursuit of artificial general intelligence, but there is no contemporary and comprehensive survey of this field. This paper comprehensively surveys state-of-the-art embodied AI simulators and research, mapping connections between them. By benchmarking nine state-of-the-art embodied AI simulators in terms of seven features, this paper aims to understand what each simulator provides for embodied AI research. Finally, based upon the simulators and a pyramidal hierarchy of embodied AI research tasks, this paper surveys the main research tasks in embodied AI -- visual exploration, visual navigation and embodied question answering (QA) -- covering the state-of-the-art approaches, evaluation and datasets.
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
- Overview (1.00)
- Research Report (0.83)
- Leisure & Entertainment > Games (1.00)
- Education (0.67)